6 research outputs found

    Evaluation of Accelerometer-Based Walking-Turn Features for Fall-Risk Assessment in Older Adults

    Get PDF
    Falls in older adult populations are a serious health concern, resulting in physical and psychological trauma in addition to increased pressure on healthcare systems. Faller classification and fall-risk assessment in elderly populations can facilitate preventative care before a fall occurs. Few studies in the fall-risk assessment field have focused on wearable-sensor-based features obtained during walking turns. Examining turn-based features may improve fall-risk assessment techniques. Seventy-six older individuals (74.15 ± 7.0 years), categorized as prospective fallers (28 participants) and non-fallers (43 participants), completed a six-minute walk test with accelerometers attached to their lower legs and pelvis. Turn and straight-walking sections were segmented from the six-minute walk test, and a feature set was extracted for each participant. This work aimed to determine whether significant differences between the prospective faller (PF) and non-faller (NF) groups existed for turn or straight-walking features. A mixed-design ANOVA with post-hoc analysis showed no significant differences between faller groups for straight-walking features, while five turn-based features differed significantly (p < 0.05): minimum of the anterior-posterior ratio of even/odd harmonics (REOH) for the right shank, standard deviation (SD) of the SD of anterior left shank acceleration, SD of mean anterior left shank acceleration, maximum of the medial-lateral first quartile of the Fourier transform (FQFFT) for the lower back, and SD of maximum anterior left shank acceleration. Turn-based features therefore merit further investigation for distinguishing PF from NF. A novel prospective faller classification method was developed using accelerometer-based features from turns and straight walking. Cross-validation was conducted for both turn and straight-walking feature-based models to assess classification performance. The best “classifier model – feature selector” combination used turn data, a random forest classifier, and a select-5-best feature selector (73.4% accuracy, 60.5% sensitivity, 82.0% specificity, 0.44 Matthews correlation coefficient (MCC)). Using only the most frequently occurring features, a feature subset achieved better classification results, with 77.3% accuracy, 66.1% sensitivity, 84.7% specificity, and a 0.52 MCC score (minimum of anterior-posterior REOH for the right shank, SD of the anterior left shank acceleration SD, SD of mean anterior left shank acceleration, maximum of medial-lateral FQFFT for the lower back, maximum of anterior-posterior FQFFT for the lower back). All classification performance metrics improved when turn data were used for faller classification, compared to straight-walking data.
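The evaluation described above — a "select-5-best" feature selector feeding a random forest classifier, scored by cross-validation on accuracy, sensitivity, specificity, and MCC — can be sketched as follows. This is an illustrative assumption of the pipeline, not the authors' actual code: the synthetic feature matrix, the scoring function (`f_classif`), and the 5-fold CV scheme are all stand-ins.

```python
# Hedged sketch of a "classifier model - feature selector" evaluation:
# select-5-best feature selection + random forest, scored with cross-validation.
# Data and exact CV scheme are illustrative assumptions, not the study's pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import confusion_matrix, matthews_corrcoef
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import Pipeline

# Stand-in for the labeled participants' turn-feature matrix
# (71 labeled participants: prospective fallers vs. non-fallers).
X, y = make_classification(n_samples=71, n_features=20, n_informative=5,
                           random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=5)),  # select-5-best feature selector
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])

# Cross-validation: fit inside each fold (selection included, to avoid leakage),
# accumulate out-of-fold predictions, then score once over all folds.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
y_true, y_pred = [], []
for train_idx, test_idx in cv.split(X, y):
    pipe.fit(X[train_idx], y[train_idx])
    y_true.extend(y[test_idx])
    y_pred.extend(pipe.predict(X[test_idx]))

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"accuracy    {(tp + tn) / (tp + tn + fp + fn):.3f}")
print(f"sensitivity {tp / (tp + fn):.3f}")  # faller recall
print(f"specificity {tn / (tn + fp):.3f}")  # non-faller recall
print(f"MCC         {matthews_corrcoef(y_true, y_pred):.3f}")
```

Fitting the feature selector inside each fold, rather than once on the full dataset, matters here: selecting features on all 71 participants before splitting would leak test information and inflate the reported metrics.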

    Unsupervised 3D Pose Estimation with Geometric Self-Supervision

    Full text link
    We present an unsupervised learning approach to recovering 3D human pose from 2D skeletal joints extracted from a single image. Our method does not require multi-view image data, 3D skeletons, or correspondences between 2D and 3D points, nor does it use previously learned 3D priors during training. A lifting network accepts 2D landmarks as input and generates a corresponding 3D skeleton estimate. During training, the recovered 3D skeleton is reprojected onto random camera viewpoints to generate new "synthetic" 2D poses. By lifting the synthetic 2D poses back to 3D and reprojecting them into the original camera view, we can define a self-consistency loss both in 3D and in 2D. Training can thus be self-supervised by exploiting the geometric self-consistency of the lift-reproject-lift process. We show that self-consistency alone is not sufficient to generate realistic skeletons; however, adding a 2D pose discriminator enables the lifter to output valid 3D poses. Additionally, to learn from 2D poses "in the wild", we train an unsupervised 2D domain adapter network that allows for an expansion of the 2D data. This improves results and demonstrates the usefulness of 2D pose data for unsupervised 3D lifting. Results on the Human3.6M dataset for 3D human pose estimation show that our approach improves upon previous unsupervised methods by 30% and outperforms many weakly supervised approaches that explicitly use 3D data.
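The lift-reproject-lift loop described above can be sketched in a few lines of NumPy. Everything here is an illustrative assumption: the "lifter" is a placeholder that fakes a constant depth (the real method trains a lifting network plus a 2D pose discriminator), the projection is orthographic, and the random viewpoint is a rotation about the vertical axis.

```python
# Minimal sketch of the lift-reproject-lift self-consistency losses.
# The lifter below is a placeholder; the paper trains a lifting network and
# a 2D pose discriminator to drive these losses toward realistic skeletons.
import numpy as np

rng = np.random.default_rng(0)

def lift(pose_2d):
    """Placeholder lifter: append a constant (fake) depth to each 2D joint."""
    depth = np.full((pose_2d.shape[0], 1), 1.0)  # a real network predicts this
    return np.hstack([pose_2d, depth])

def project(pose_3d):
    """Orthographic projection: drop the depth coordinate."""
    return pose_3d[:, :2]

def random_rotation_y(rng):
    """Rotation about the vertical axis = a random synthetic camera viewpoint."""
    a = rng.uniform(0, 2 * np.pi)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

pose_2d = rng.normal(size=(17, 2))       # 17 skeletal joints from one image
skel_3d = lift(pose_2d)                  # lift: 2D -> 3D
R = random_rotation_y(rng)
synth_2d = project(skel_3d @ R.T)        # reproject in a random camera view
skel_3d_back = lift(synth_2d) @ R        # lift again, rotate back to original view

# Self-consistency losses in 3D and, after projecting back, in 2D.
loss_3d = np.mean((skel_3d_back - skel_3d) ** 2)
loss_2d = np.mean((project(skel_3d_back) - pose_2d) ** 2)
print(loss_3d, loss_2d)
```

With the fake constant-depth lifter these losses stay nonzero; training minimizes them over many poses and random viewpoints, which is the self-supervision signal the abstract refers to.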

    Faller Classification in Older Adults Using Wearable Sensors Based on Turn and Straight-Walking Accelerometer-Based Features

    No full text
    Faller classification in elderly populations can facilitate preventative care before a fall occurs. A novel wearable-sensor-based faller classification method for the elderly was developed using accelerometer-based features from straight walking and turns. Seventy-six older individuals (74.15 ± 7.0 years), categorized as prospective fallers and non-fallers, completed a six-minute walk test with accelerometers attached to their lower legs and pelvis. After segmenting straight and turn sections, cross-validation tests were conducted on straight- and turn-walking features to assess classification performance. The best “classifier model – feature selector” combination used turn data, a random forest classifier, and a select-5-best feature selector (73.4% accuracy, 60.5% sensitivity, 82.0% specificity, and 0.44 Matthews correlation coefficient (MCC)). Using only the most frequently occurring features, a feature subset (minimum of the anterior-posterior ratio of even/odd harmonics for the right shank, standard deviation (SD) of the anterior left shank acceleration SD, SD of mean anterior left shank acceleration, maximum of the medial-lateral first quartile of the Fourier transform (FQFFT) for the lower back, maximum of the anterior-posterior FQFFT for the lower back) achieved better classification results, with 77.3% accuracy, 66.1% sensitivity, 84.7% specificity, and a 0.52 MCC score. All classification performance metrics improved when turn data were used for faller classification, compared to straight-walking data. Combining turn and straight-walking features decreased performance metrics compared to turn features alone for similar classifier model – feature selector combinations.

    Abstracts

    No full text